Infection Control & Hospital Epidemiology
Cambridge University Press (CUP)
Preprints posted in the last 30 days, ranked by how well they match Infection Control & Hospital Epidemiology's content profile, based on 17 papers previously published here. The average preprint has a 0.09% match score for this journal, so anything above that is already an above-average fit.
Mahfouz, M.; Alzaben, E.
Background: Canine impaction represents one of the most challenging clinical scenarios in orthodontic practice, with maxillary canines being the second most commonly impacted teeth after third molars. Management of impacted canines through orthodontic traction requires an advanced understanding of biomechanical principles, surgical techniques, and patient-specific factors. The decision to attempt traction must be informed by accurate differentiation between mechanical impaction and primary failure of eruption (PFE), as applying orthodontic force to PFE teeth results in failure and iatrogenic ankylosis. Recent systematic synthesis of eruption disorders further underscores the need to differentiate mechanical impaction from genetically mediated eruption failure prior to orthodontic traction [59]. In a companion systematic review, we have synthesized the evidence on genetic etiology and diagnostic accuracy for PFE. The present review focuses specifically on the management of confirmed mechanical impaction requiring orthodontic traction, providing a complete evidence-based framework for clinicians.

Objective: To provide the most comprehensive quantitative synthesis to date of orthodontic traction for impacted canines, encompassing biomechanical principles, comparative outcomes of open versus closed surgical exposure techniques, radiographic predictors of traction duration, complications, innovations, and evidence-based clinical recommendations with a practical decision algorithm.

Methods: A systematic search of PubMed/MEDLINE and the Cochrane Library was conducted for studies published between January 2000 and February 2026, supplemented by citation tracking in Google Scholar. The PRISMA 2020 guidelines were followed, and the protocol was prospectively registered on the Open Science Framework (DOI: 10.17605/OSF.IO/3UDH6). Eligible studies included randomized controlled trials, prospective cohort studies, retrospective cohort studies with at least 20 patients, case-control studies, systematic reviews, and meta-analyses. Risk of bias was assessed using the ROBINS-I, RoB 2.0, and ROBIS tools. Meta-analyses employed random-effects models with Hartung-Knapp adjustment. Heterogeneity was assessed using I² and τ² statistics, and prediction intervals were calculated for meta-analyses with substantial heterogeneity. The GRADE framework evaluated evidence quality. Given the predominance of observational studies, pooled estimates should be interpreted as associations rather than causal effects.

Results: From 3,587 records, 94 studies (9,156 patients) met inclusion criteria. Optimal force magnitudes range from 50 to 150 g, with force direction determined by the center of resistance, located halfway along the root length. Meta-analyses demonstrated comparable success rates between open (91%, 95% CI: 88-94%) and closed (93%, 95% CI: 89-95%) surgical exposure techniques (9 studies; 3 RCTs, 6 observational; τ² = 0.00). Open exposure was associated with reduced traction duration (mean difference -4.7 months, 95% CI: -7.3 to -2.1; I² = 87%, τ² = 5.82; prediction interval -9.8 to 0.4 months) and lower ankylosis risk (OR 0.15, 95% CI: 0.03-0.83; I² = 0%, τ² = 0.00). Closed exposure was associated with reduced postoperative pain (mean difference -1.9 VAS points, 95% CI: -2.6 to -1.2; I² = 0%, τ² = 0.00). Radiographic predictors include the alpha angle (β = 0.16 months/degree), d-distance (β = 1.20 months/mm), and sector location. Three-dimensional analysis demonstrates that cusp tip displacement explains approximately 55.4% of variance in traction duration. Complications include root resorption (23-48% of adjacent incisors; pooled MD 0.69 mm, 95% CI: 0.58-0.80 mm), alveolar bone loss (pooled MD 0.51 mm, 95% CI: 0.31-0.72 mm), and ankylosis (3.5-14.5%). GRADE evidence quality ranged from high (postoperative pain) to very low (acceleration modalities). Innovations: temporary anchorage devices (moderate-high certainty, established); digital workflows (moderate, emerging); clear aligner-based traction (low, experimental); low-level laser therapy (low-moderate, adjunct only); vibration devices (high-quality negative evidence, not recommended).

Conclusions: This quantitative synthesis demonstrates that both open and closed surgical exposure techniques yield excellent success rates. Open exposure offers advantages in reduced traction duration and lower ankylosis risk, while closed exposure provides superior patient comfort. Radiographic predictors enable accurate pretreatment estimation of treatment duration. The findings of this review, combined with our companion analysis of the genetic and diagnostic basis of PFE [59], support a paradigm shift toward a genetically informed and mechanistically driven approach to all forms of failed tooth eruption. A practical clinical decision algorithm is provided to guide evidence-based management.
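The random-effects machinery this abstract describes (inverse-variance pooling with a DerSimonian-Laird between-study variance, Hartung-Knapp variance adjustment, and a prediction interval) can be sketched as below. This is a simplified illustration, not the review's analysis code: it uses normal critical values (1.96) where a full implementation would use t-distribution quantiles, and the study values in the test are invented.

```python
import math

def random_effects_hk(y, v):
    """Random-effects meta-analysis (DerSimonian-Laird tau^2) with the
    Hartung-Knapp variance adjustment and an approximate 95% prediction
    interval. y: per-study effect estimates (e.g. mean differences in
    months); v: per-study within-study variances. Normal critical values
    are used for brevity; real analyses use t quantiles."""
    k = len(y)
    w = [1.0 / vi for vi in v]                                # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))    # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                        # DL between-study variance
    ws = [1.0 / (vi + tau2) for vi in v]                      # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)
    # Hartung-Knapp: pooled-estimate variance from weighted residuals
    var_hk = sum(wi * (yi - mu) ** 2 for wi, yi in zip(ws, y)) / ((k - 1) * sum(ws))
    se = math.sqrt(var_hk)
    ci = (mu - 1.96 * se, mu + 1.96 * se)
    # Prediction interval: adds tau^2 to the uncertainty about mu, so it
    # covers the effect expected in a *future* study, not just the mean.
    se_pi = math.sqrt(tau2 + var_hk)
    pi = (mu - 1.96 * se_pi, mu + 1.96 * se_pi)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0  # I^2 (%)
    return mu, ci, pi, tau2, i2
```

Because the prediction interval inflates the standard error by τ², it is always at least as wide as the confidence interval, which is why a pooled effect can be "significant" while the prediction interval still crosses zero, as in the -9.8 to 0.4 months interval quoted above.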
Okolo, C. C.; Amole, T. G.
Background The microbial aetiology of early childhood caries (ECC) in sub-Saharan African populations remains poorly characterised, with most studies focusing on conventional cariogenic pathogens like Streptococcus mutans. This study aimed to characterise the salivary microbial profile of children with ECC in urban Kano, northern Nigeria. Methods In this cross-sectional study of 162 children aged 3-5 years in urban Kano, unstimulated saliva samples were collected and analysed using standard bacteriological culture methods. Caries status was assessed using decayed, missing, and filled teeth (dmft) index and International Caries Detection and Assessment System (ICDAS). Microbial isolates were identified through Gram staining, colony morphology, and biochemical tests (catalase, coagulase, oxidase). Results Of 32 microbial isolates obtained, Staphylococcus aureus was the most prevalent (43.8%, n=14), followed by Streptococcus species (28.1%, n=9), Klebsiella species (12.5%, n=4), non-aureus staphylococci (6.3%, n=2), yeast (6.3%, n=2), and Pseudomonas species (3.1%, n=1). Only one isolate demonstrated direct association with dmft-detectable caries. Polymicrobial colonisation occurred in four cases (12.5%), predominantly featuring S. aureus-yeast combinations (n=2). White spot lesions (ICDAS 1-2) were associated with S. aureus and Klebsiella species in two separate cases. Conclusion This study reveals an unexpected predominance of S. aureus in the salivary microbiome of children in northern Nigeria, challenging conventional paradigms of ECC microbiology. The low correlation between microbial isolates and clinical caries suggests complex, multifactorial aetiology. These findings highlight the need for molecular characterisation of oral microbiomes in African populations and reconsideration of caries pathogenesis models in this unique epidemiological context.
Bjelovucic, R.; de Freitas, B. N.; Norholt, S. E.; Taneja, P.; Terp Hoybye, M.; Pauwels, R.
Introduction: Digital technologies are reshaping how health professionals are trained, and extended reality (XR) has gained attention as a tool for skills development in dental education. Yet successful integration depends largely on educators' perceptions, readiness, and working conditions. This study aimed to explore dental educators' views of the educational value of XR, the barriers they experience, and how familiarity with immersive technologies relates to their use in teaching.

Materials and Methods: A cross-sectional, web-based survey was conducted among dental educators. The questionnaire included items on demographics, familiarity and frequency of XR use, and perceptions of educational value, barriers, and curricular integration. Descriptive statistics were calculated, and Spearman correlation analyses were performed to explore associations between familiarity, use, and perceived benefits of XR.

Results: Respondents reported positive attitudes toward XR, particularly for improving students' understanding of complex anatomy (mean = 6.02/7), skill development (5.68/7), and confidence and preparedness for clinical practice (5.08-5.20/7). XR was mainly viewed as a complement to traditional teaching rather than a replacement (mean = 3.77/7). Strong correlations were observed between perceived improvements in confidence, skills, and clinical readiness (r = 0.71-0.89, P < 0.0001). High costs, limited technical support, and time constraints were the most prominent barriers to use.

Conclusion: Overall, dental educators appear open to XR but constrained by structural and organizational factors rather than a lack of interest. Faculty development, hands-on training opportunities, and institutional support may therefore be essential to translating positive perceptions into meaningful and sustained integration of immersive technologies in dental curricula.
Nagar, S. S.; Chandra, R. V.; Aileni, A. R.; Goud, V. S.
Aim and Objectives: The study aimed to evaluate the effectiveness of titanium inserts for interdental papilla reconstruction, compared with the Han and Takei technique using subepithelial connective tissue grafts. The objectives included assessing black triangle height, papilla height, and papilla presence index (PPI) at baseline, 1 month, and 3 months postoperatively, along with evaluation of the Early Wound Healing Score (EHS) during the first postoperative week.

Patients and Methods: This single-blind randomized clinical trial included systemically healthy individuals aged 18-35 years with Nordland and Tarnow's Class I-III papillary loss. A total of 18 participants were randomly assigned to either the test or control group. Clinical parameters were measured pre- and postoperatively at specified intervals. Both groups received standard presurgical care and postoperative follow-up. The surgical protocol for the test group involved titanium insert placement in the interdental bone, while the control group received a connective tissue graft using the Han and Takei method.

Results: Both groups showed significant intragroup improvements in all parameters from baseline to 1 and 3 months (p<0.05). However, intergroup comparisons showed no significant differences at most time points, except for PPI at 3 months, where the control group showed significantly better results (p=0.04). EHS scores did not differ significantly between groups.

Conclusion: Titanium inserts and CTG both demonstrated clinical effectiveness in enhancing interdental papilla dimensions. These findings support the titanium insert as a viable, less invasive alternative, offering clinicians a practical option for esthetic papilla reconstruction.
Tesfaye Guteta, E.; Diriba, A.; Tesfaye, K.; Kedir, E.; Wakgari, M.; Jabessa, D.; Chali, M.; Biyena, K.; Sileshi, G.; Jobir, G.
From 2021 to 2025, MRSA emerged as a major multidrug-resistant pathogen in the study area. Among 545 S. aureus isolates, 67.2% were MRSA, disproportionately affecting children under five (26.5%) and males (55.5%). Case incidence more than doubled by 2025, suggesting rising transmission or resistance. Most isolates were hospital-associated (85.2%), predominantly from outpatients (88.5%), with middle ear discharge as the main source (67%). Gentamicin showed the highest susceptibility (72.1%), while penicillin G resistance was nearly universal (96.7%). The majority (93.4%) were multidrug-resistant, with high multiple antibiotic resistance index (MARI) values indicating widespread and likely inappropriate antibiotic use. These findings reflect a complex interplay between pathogen behavior, antimicrobial use, and healthcare practices. The increasing MRSA burden may stem from inadequate infection control, poor stewardship, or enhanced community transmission. Incorporating molecular typing could deepen understanding of strain diversity and resistance mechanisms to guide targeted interventions.
Morisson, L.; Latreille, A.; Pietrancosta, M.; Djerroud, K.; Tanoubi, I.; Hemmerling, T.; Laferriere-Langlois, P.
Purpose To quantify and compare the peak force applied on the glottis during endotracheal intubation across five laryngoscopy techniques, two intubation conditions (standard and simulated laryngospasm), and two operator experience levels, and to assess the effects of stylet use and operator anthropometric characteristics on applied force. Methods This prospective, manikin-based experimental study enrolled 50 operators (30 experienced, 20 less experienced). Each performed endotracheal intubation using five techniques: direct laryngoscopy and videolaryngoscopy with a Macintosh blade, each with and without stylet, and videolaryngoscopy with a hyperangulated blade with stylet. A calibrated force sensor positioned at the glottis measured peak forces during standard and simulated laryngospasm conditions. Non-parametric statistical methods were used (Mann-Whitney U, Wilcoxon signed-rank, Friedman tests); effect sizes are reported as rank-biserial correlations. Results Across all techniques, median glottic forces ranged from 4.8 N (IQR: 3.3-6.5) for videolaryngoscopy without stylet to 11.1 N (IQR: 7.5-14.5) for direct laryngoscopy with stylet under standard conditions. No significant differences in applied force were observed between experienced and less experienced operators for any technique-condition combination (all adjusted p = 1.0; |r| < 0.27). Stylet use significantly increased glottic force across all conditions and groups (median increases 3.4-7.3 N; all p < 0.001; rank-biserial r > 0.75). Videolaryngoscopy with a Macintosh blade produced significantly lower forces than hyperangulated videolaryngoscopy under standard conditions (adjusted p = 0.049). Neither grip strength nor hand size correlated with applied force. Conclusion Glottic force during endotracheal intubation is determined primarily by technique and stylet use, not operator experience or anthropometrics. Stylet use is the single largest modifiable contributor to glottic force. 
These findings have implications for device selection, clinical training, and strategies to minimize airway trauma during intubation.
Nguyen Thi, K. A.; Paterson, D. L.; Mo, Y.; Ezure, Y.; Pham, D. T.; Thwaites, C. L.
Background: Hospital-acquired bacterial pneumonia (HABP) and ventilator-associated bacterial pneumonia (VABP), particularly those caused by multidrug-resistant organisms (MDROs), often require treatment with newer antibiotics. The efficacy and safety of newer antibiotics compared with generic antibiotics in randomized controlled trials (RCTs) have not previously been evaluated.

Methods: In this systematic review, we searched for RCTs published between 2013 and 2025 in the United States National Library of Medicine (PubMed), Cochrane Central Register of Controlled Trials (CENTRAL), Scopus, Ovid MEDLINE, ClinicalTrials.gov, and Google Scholar. The primary efficacy endpoint was 28-day all-cause mortality. Secondary efficacy endpoints were clinical and microbiological response. The safety endpoint was nephrotoxicity.

Results: We identified eight eligible RCTs involving 2,881 patients with HABP/VABP (1,450 treated with newer antibiotics and 1,431 treated with generic antibiotics). The meta-analysis did not reveal any significant differences between newer and generic antibiotics for all-cause mortality at day 28 (risk ratio (RR) 0.97, 95% confidence interval (CI) 0.72-1.30), clinical response (RR 1.04, 95% CI 0.93-1.17), or microbiological response (RR 1.05, 95% CI 0.89-1.24). However, newer antibiotics showed a significantly lower occurrence of nephrotoxicity compared with colistin-based regimens (RR 0.30, 95% CI 0.11-0.79). In subgroup analysis, newer antibiotic regimens demonstrated significant improvement in microbiological eradication of carbapenem-resistant Gram-negative bacilli (RR 1.50, 95% CI 1.18-1.90).

Conclusions: Newer antibiotics showed similar efficacy and safety to generic drugs in treating HABP/VABP. Their superiority in microbiological eradication of carbapenem-resistant Gram-negative bacilli suggests that future trials should target these patients to improve understanding of their therapeutic use and the pathophysiology of these conditions.

Key points: Newer antibiotics, despite broader antimicrobial coverage, have not significantly outperformed generic comparators in terms of 28-day all-cause mortality, clinical response, or microbiological response in patients with Gram-negative HABP/VABP. This may reflect limitations in current trial designs focused primarily on regulatory approval.
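The risk-ratio-with-CI style of reporting used throughout this abstract (e.g. "RR 0.97, 95% CI 0.72-1.30") can be reproduced from 2x2 trial counts with the standard log-scale (Katz) interval. The sketch below is illustrative only; the counts in the test are invented, not data from any of the eight trials.

```python
import math

def risk_ratio(a, n1, c, n2):
    """Risk ratio with a 95% CI computed on the log scale (Katz method).
    a/n1: events/total in the intervention arm; c/n2: in the comparator arm."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)   # standard error of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi
```

A CI that spans 1.0 (as for mortality and clinical response above) indicates no statistically significant difference between arms at the 5% level.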
Kitutu, F. E.; Blaas, C.; Mukisa, P.; Schedwin, M.; Baker, T. B.; Bakare, A. A.; Bishit, D.; Mkumbo, E.; Oliwa, J.; Nzinga, J.; Namasopo, S.; Ruane, M.; Adeniji, A.; Hawkes, M.; Rai, A.; Njuguna, M.; Graham, H. R.; King, C.
Background: Medical oxygen is an essential medicine that is often unavailable to patients when they need it. We explored whether Outsourced Oxygen to the Bedside (O2B) pilots, in which private providers deliver a package of services, were successful in ensuring reliable oxygen access at the patient bedside.

Methods: We conducted a sequential explanatory mixed-methods assessment of O2B pilots in Kenya, Nigeria, India, Tanzania, and Uganda from September 2024 to January 2025. A quantitative cross-sectional facility audit described facility contexts, tested equipment functionality, and assessed healthcare worker (HCW) oxygen knowledge. Qualitative interviews with HCWs and managers explored experiences of the O2B pilots.

Results: We studied 28 of the 80 facilities participating in the pilots; 179 HCWs completed the knowledge survey, and 59 qualitative interviews were conducted. In the audit, O2B-provided oxygen equipment was more often functional and usable than non-O2B equipment: 49.0% vs 30.1% (p<0.001) for cylinders, 82.9% vs 20.3% (p<0.001) for concentrators, and 84.0% vs 70.0% (p=0.172) for pulse oximeters. Overall, 21.8% (39/179) of HCWs had received training from O2B providers, and their oxygen knowledge was slightly higher than that of those who had not (mean score 15.3/24 vs 13.9/24, p=0.002). Qualitative interviews highlighted positive changes in oxygen access and the ability to treat patients, but also mixed understandings of the O2B services being provided and requests for additional services.

Conclusion: O2B pilots appear to improve medical oxygen access, with effective maintenance and repair services being a key mechanism. However, tailoring to local needs and remaining gaps in HCW capacity need to be addressed.
Mahfouz, M.; Alzaben, E.
Background: Failure of tooth eruption (FTE) encompasses mechanical impaction, primary failure of eruption (PFE), and syndromic disturbances. Since the seminal review by Suri et al. (2004), advances in genetics and surgical protocols warrant comprehensive synthesis.

Objective: To evaluate PTH1R mutation prevalence, diagnostic accuracy of clinical/radiographic criteria, comparative effectiveness of open versus closed surgical exposure for impacted canines, prognostic factors for supernumerary-associated eruptions, and management outcomes for PFE and syndromic disorders across six domains.

Methods: PubMed/MEDLINE, the Cochrane Library, and Google Scholar were searched (January 2004-February 2026). To enhance reproducibility, databases with broad public accessibility were prioritized; Google Scholar was used only for citation tracking, not as a primary database, to minimize algorithmic bias and irreproducibility. PRISMA 2020 guidelines were followed, and the protocol was registered on OSF (DOI: 10.17605/OSF.IO/R5X76). Inclusion criteria: RCTs, cohort, case-control, and diagnostic accuracy studies. Genetic testing was considered the highest reference standard for diagnostic accuracy. Risk of bias was assessed using ROBINS-I, QUADAS-2, and RoB 2.0. Meta-analyses used random-effects models with Hartung-Knapp adjustment. Heterogeneity was assessed using I² statistics, with sources explored through subgroup analyses, meta-regression, and prognostic factor analysis. GRADE evaluated evidence quality. Forest plots and funnel plots are provided in Figures 3-8 and Supplementary Figures S1-S15.

Figure 3: Forest Plot - Treatment Duration Difference (Closed vs. Open Exposure). Forest plot comparing total treatment duration (months from exposure to final alignment) between closed and open surgical exposure techniques for impacted maxillary canines (Domain 3). Data from 8 studies comprising 1,287 canines. Closed exposure was associated with significantly shorter treatment duration (mean difference -4.7 months; 95% CI: -7.3 to -2.1; p < 0.001). Heterogeneity was moderate to high (I² = 64.1%), partially explained by study design in meta-regression (RCTs vs. cohorts, p = 0.04). The 95% prediction interval (-9.8 to 0.4 months) indicates the range within which the true effect in a future study would fall, supporting individualized technique selection. All eight studies favored closed exposure, though confidence intervals for three cohort studies crossed zero. Study weights ranged from 4.0% to 18.2%. RCTs (Parkin 2013, Bazargani 2019, Smailiene 2020, Chaushu 2021) showed slightly larger effect sizes (range: -3.8 to -6.1 months) than cohort studies (Becker 2010, Fleming 2015, Kokich 2012, Zuccati 2018; range: -3.2 to -6.4 months). Diamond represents the pooled estimate; squares represent individual study weights, with horizontal lines indicating 95% confidence intervals.

Figure 8: Forest Plot - Spontaneous Eruption After Supernumerary Removal. Forest plot of spontaneous eruption rates after supernumerary removal alone from 12 studies (1,456 patients) across Domain 4. Reported rates ranged from 48% to 68% across studies (I² = 71.2%). High heterogeneity reflects differences in patient age (deciduous vs. mixed vs. permanent dentition), supernumerary morphology (conical vs. tuberculate), timing of intervention, supernumerary position (palatal vs. labial vs. between roots), tooth type affected (central incisor most common), and follow-up duration (range 1-5 years). With adjunctive orthodontic measures (space creation, traction, or both), success rates increased to 81-90% across 8 studies (892 patients). Study weights ranged from 8.4% to 8.9%. Prognostic factor analysis (Table 6) identified favorable factors including removal during the deciduous dentition (OR 2.5-5.5), conical supernumerary morphology (OR 3.0-6.5), and incomplete root formation of the permanent incisor (OR 2.5-5.0). Unfavorable factors included tuberculate morphology (OR 0.2-0.4) and complete root formation (OR 0.2-0.5). Diamond represents the pooled estimate; squares represent individual study estimates, with horizontal lines indicating 95% confidence intervals.

Results: From 3,587 records, 94 studies (9,156 patients) were included across six domains. Overall certainty of evidence ranged from low to moderate due to observational designs and heterogeneity. Domain 1 (Genetic Basis): PTH1R mutation prevalence in PFE ranged from 52-90% (16 studies, 487 patients; I² = 68%; Figure 6). Heterogeneity reflected differences in familial vs. sporadic cases and referral bias. Population-level prevalence remains unknown. Sixty-three variants were identified. Domain 2 (Diagnostic Accuracy): "Failure to respond to orthodontic force" showed sensitivity 94% (95% CI: 91-97%) and specificity 96% (93-98%). "Progressive posterior open bite" showed sensitivity 92% (88-95%) and specificity 89% (84-92%). Reference standard heterogeneity (I² = 45-65%) was addressed through bivariate and HSROC models. CBCT provided superior root resorption detection (97% vs. 68%; p < 0.001). Domain 3 (Canine Impaction): Open (91% [88-94%]) and closed (93% [89-95%]) exposure achieved comparable success (I² = 52%). Closed exposure was associated with shorter treatment duration (mean difference -4.7 months [-7.3 to -2.1]; I² = 64%; Figure 3) and lower postoperative pain (-1.9 VAS [-2.6 to -1.2]; I² = 58%; Figure 4). Prediction intervals (-9.8 to 0.4 months) support individualized technique selection. Funnel plots showed no significant publication bias (Figure 7). Domain 4 (Supernumerary): Spontaneous eruption after removal alone: 48-68% (I² = 71%; Figure 8); with adjunctive orthodontics: 81-90%. Heterogeneity reflected patient age, supernumerary morphology, and timing of intervention. Favorable factors: removal during the deciduous dentition (OR 2.5-5.5), conical morphology (OR 3.0-6.5), incomplete root formation (OR 2.5-5.0). Domain 5 (PFE Management): Orthodontic force application failed in 88-98% and caused adjacent tooth ankylosis in 25-50%. Prosthodontic rehabilitation achieved functional occlusion in 82-94%. Implant success: 85-95%. Meta-analysis was not performed due to critical heterogeneity. Domain 6 (Syndromic): Cleidocranial dysplasia alignment: 61-75%. Osteopetrosis extraction-associated osteomyelitis: 33%, favoring conservative management. Narrative synthesis only.

Figure 6: Forest Plot - PTH1R Mutation Prevalence. Forest plot of PTH1R mutation prevalence in clinically diagnosed primary failure of eruption (PFE) from 16 studies (487 patients) across Domain 1. Reported prevalence varied substantially across studies, ranging from 52% to 90% (I² = 68%). Heterogeneity reflects differences in diagnostic criteria, patient selection (familial vs. sporadic cases), and referral bias. Subgroup analysis showed higher prevalence in familial cases (range 79-92%; 9 studies) than in sporadic cases (range 54-71%; 12 studies). Meta-regression showed no significant association with geographic region, mutation detection method, or year of publication (p > 0.05 for all). Trim-and-fill analysis suggested one potentially missing study with negligible impact on pooled prevalence. Study weights ranged from 5.7% to 6.8%. The most frequently reported studies include Frazier-Bowers 2010 (0.75, 95% CI: 0.58-0.87), Risom 2013 (0.82, 95% CI: 0.66-0.92), and Park 2025 (0.89, 95% CI: 0.74-0.96). Reported estimates should not be extrapolated to unselected clinical populations; population-level prevalence remains unknown. Diamond represents the pooled estimate; squares represent individual study estimates, with horizontal lines indicating 95% confidence intervals.

Figure 4: Forest Plot - Postoperative Pain Difference (Closed vs. Open Exposure). Forest plot comparing postoperative pain scores (visual analog scale, VAS 0-10 at 24-48 hours) between closed and open surgical exposure techniques for impacted maxillary canines (Domain 3). Data from 5 studies comprising 842 patients. Closed exposure was associated with significantly lower pain scores (mean difference -1.9; 95% CI: -2.6 to -1.2; p < 0.001). Heterogeneity was moderate (I² = 58.2%), reflecting differences in pain measurement timing (24 h vs. 48 h), analgesic protocols, and study design (RCT vs. cohort). The consistent direction of effect across all studies supports the robustness of the findings; all five studies favored closed exposure for reduced postoperative pain. Study weights ranged from 17.5% to 22.4%. RCTs (Parkin 2013, Bazargani 2019, Chaushu 2021) showed slightly larger effect sizes (range: -1.8 to -2.4) than cohort studies (Becker 2010, Fleming 2015; range: -1.2 to -1.6). Diamond represents the pooled estimate; squares represent individual study weights, with horizontal lines indicating 95% confidence intervals.

Figure 7: Funnel Plot - Publication Bias for Canine Studies. Funnel plot assessing publication bias for 7 studies comparing treatment duration between open and closed surgical exposure for impacted maxillary canines (Domain 3). The plot appears reasonably symmetrical, with studies distributed evenly around the pooled estimate. Egger's test was non-significant (p = 0.38), suggesting no strong evidence of publication bias for this outcome. Each circle represents an individual study; the funnel shape represents the pseudo-95% confidence interval limits. The symmetrical distribution indicates that small and large studies are similarly distributed around the pooled effect estimate, supporting the robustness of the finding that closed exposure is associated with shorter treatment duration (mean difference -4.7 months; 95% CI: -7.3 to -2.1). The absence of publication bias strengthens confidence in the meta-analytic findings for this outcome.

Conclusions: These findings support a paradigm shift toward genetically informed orthodontic decision-making across six integrated domains. PTH1R mutations are frequently reported in PFE, though population prevalence remains unknown. Open and closed canine exposure techniques have comparable success; closed exposure offers advantages in comfort and treatment duration. Early supernumerary intervention improves outcomes. Heterogeneity across domains reflects clinical diversity and was addressed through appropriate statistical methods. Orthodontic forces should be avoided in confirmed PFE.

Registration: Open Science Framework (DOI: 10.17605/OSF.IO/R5X76)
Born, G.
BackgroundBehavioral telemetry--the analysis of clinical actions NOT taken--may identify care process failures associated with adverse outcomes. While missed nursing care predicts outcomes in survey-based studies, objective EHR-derived measures are lacking. We hypothesized that missing routine cognitive assessment in ICU patients with low acute physiologic derangement would predict mortality independent of illness severity. MethodsRetrospective cohort study using MIMIC-IV (2008-2022, Beth Israel Deaconess Medical Center) with external assessment of documentation practices in eICU (208 US hospitals). We identified ICU admissions with SOFA 0-2 (low acute physiologic derangement), excluding neurological ICUs. Orientation documentation was classified within 24 hours. Primary outcome was in-hospital mortality. Multivariable logistic regression adjusted for age, sex, SOFA, and Charlson Index. ResultsAmong 46,004 ICU patients with SOFA 0-2, 4,737 (10.3%) had no orientation documentation within 24 hours. These patients had 24.68% mortality versus 7.57% early-assessed and 4.56% late-assessed. After adjustment, missing orientation was associated with 4.29-fold higher odds of death (95% CI 3.95-4.65; E-value 8.0). In SOFA=0 patients (N=23,670), the signal strengthened (OR 5.65, 95% CI 5.03-6.35; E-value 10.8). Late-assessed patients had the LOWEST mortality (OR 0.65), arguing against reverse causation. Patients without orientation had 22% MORE chart events (1,600 vs 1,309), arguing against neglect. External assessment revealed that among 166 eICU hospitals with [≥]100 eligible patients, only 5% documented orientation routinely--92% lack the infrastructure to detect this signal. ConclusionsIn ICU patients with low acute physiologic derangement, absence of orientation assessment is associated with 4-6 fold increased mortality. This association may identify care process failures not captured by severity scores, though prospective studies are needed to establish causality. 
Key Points: Question: Does absence of routine orientation assessment predict mortality in ICU patients with low acute physiologic derangement (SOFA 0-2), independent of illness severity? Findings: In this cohort study of 46,004 ICU patients with SOFA 0-2, those without orientation documentation within 24 hours had 4.29-fold higher adjusted odds of death (95% CI 3.95-4.65). In SOFA=0 patients, the signal strengthened to OR 5.65 (E-value 10.8). Patients assessed late (6-24h) had the lowest mortality (OR 0.65), arguing against reverse causation. Among 166 eICU hospitals, only 5% document orientation routinely; 92% lack the infrastructure to detect this signal. Meaning: Missing routine cognitive assessment may identify care process failures associated with increased mortality. The finding that 92% of US ICUs lack the documentation infrastructure to detect this signal reveals a systemic gap in care process monitoring. What is Already Known on This Topic: Missed nursing care (care omissions) predicts patient mortality in survey-based studies. Nurse staffing ratios are associated with mortality, but the mechanism is poorly understood. No objective, EHR-derived measure exists to detect care process omissions in real time. What This Study Adds: First EHR-based operationalization of the missed nursing care construct, enabling objective, real-time detection. Missing orientation assessment was associated with a 4- to 6-fold increase in mortality (OR 4.29 in SOFA 0-2; OR 5.65 in SOFA=0). The signal strengthens in SOFA=0 patients (E-value 10.8), suggesting the finding is not driven by acute illness severity. Argues against reverse causation: late assessment has better outcomes than early or no assessment. Argues against neglect: patients without assessment had more documentation, not less. Argues against immortal time bias: never-documented patients had longer ICU stays (7.58 vs 3.09 days). Quantifies the association: 10.3% of patients account for 27.2% of deaths. Reveals a systemic gap: 92% of US ICUs lack the documentation infrastructure to detect this signal.
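The E-values reported in this abstract can be reproduced with VanderWeele and Ding's formula for a risk ratio (applied here to the ORs as approximate risk ratios, as is conventional); a minimal sketch:

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio rr > 1: the minimum strength of
    association (on the risk-ratio scale) that an unmeasured confounder
    would need with both exposure and outcome to explain away the result."""
    return rr + math.sqrt(rr * (rr - 1.0))

print(round(e_value(4.29), 1))  # 8.0  (SOFA 0-2 cohort)
print(round(e_value(5.65), 1))  # 10.8 (SOFA = 0 subgroup)
```

Both values match the abstract's reported E-values, which is a useful sanity check on the OR-to-E-value mapping.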
Yoshida, H.; Adelman, M. W.; Rasmy, L.; Ifiora, F.; Xie, Z.; Perez, M. A.; Guerra, F.; Yoshimura, H.; Jones, S. L.; Arias, C. A.; Zhi, D.; Nigo, M.
Background: Candidemia is a rare but life-threatening bloodstream infection that remains difficult to predict with conventional risk stratification, so empiric antifungal therapy is often delayed even in high-risk patients. Methods: We developed a deep learning model (PyTorch_EHR) to predict 7-day candidemia risk using electronic health record data from two large cohorts (Houston Methodist Hospital System [HMHS] and MIMIC-IV), including adult inpatients who underwent at least one blood culture. Model performance was compared with logistic regression (LR), LightGBM, and established intensive care unit candidemia scores. We further implemented a two-step prediction framework integrating candidemia and 30-day mortality risk models to inform empiric antifungal decision-making. Results: Among 213,404 and 107,507 patients in the HMHS and MIMIC-IV cohorts, candidemia occurred in fewer than 1% (851 [0.4%] and 634 [0.6%], respectively). PyTorch_EHR outperformed LR, LightGBM, and existing candidemia scores, particularly in area under the precision-recall curve (AUPRC), in both HMHS and MIMIC-IV. By integrating 30-day mortality risk, the two-step framework identified an additional 20 and 28 candidemia cases beyond the one-step model, increasing coverage to 61% (121/199) and 46% (68/147) in HMHS and MIMIC-IV, respectively. Many patients identified by the two-step framework had high mortality yet did not receive empiric antifungal therapy (61.1% in HMHS; 82.6% in MIMIC-IV). Conclusion: A two-step deep-learning framework integrating candidemia and mortality risk may support early identification of high-risk patients and facilitate timely empiric antifungal therapy. Prospective studies are warranted to confirm these findings.
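With candidemia prevalence below 1%, AUPRC is a more informative comparison metric than AUROC because it tracks the precision achievable under extreme class imbalance. A minimal, dependency-free sketch of the average-precision formulation (an illustration, not the authors' code):

```python
def auprc(labels, scores):
    """Average precision: sweep the ranking from highest to lowest score;
    at each true positive, accumulate the precision at that point, then
    normalize by the total number of positives."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    tp = fp = 0
    ap = 0.0
    for _, y in ranked:
        if y == 1:
            tp += 1
            ap += tp / (tp + fp)
        else:
            fp += 1
    return ap / sum(labels)

# A model that ranks the single positive case 2nd of 5 candidates:
print(auprc([0, 1, 0, 0, 0], [0.9, 0.8, 0.7, 0.2, 0.1]))  # 0.5
```

Unlike AUROC, a random classifier's AUPRC equals the outcome prevalence, so a 0.4% baseline makes gains on this metric directly interpretable.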
Menif, B.; Wirth, S. E.; Wroblewski, D.; Connors, J.; Correa, N.; Delaney, M. L.; Bry, L.
Background: Clostridium perfringens can cause life-threatening extraintestinal infections in immunocompromised patients, yet little is known about the strain factors that shape patient risks and outcomes. Methods: We conducted genomic-epidemiologic analyses of C. perfringens isolates from 70 patients seen at Brigham and Women's Hospital over 2021-2024. Genomic analyses evaluated strain profiles within a broader context of 2,321 C. perfringens genomes from foodborne, veterinary, clinical, and environmental sources to identify factors associated with invasive infections. Results: Of 70 patients with C. perfringens infections (mean age 67.6 years), 32 had invasive infections; two-thirds of these patients had active malignancies, and more than half were immunocompromised. Patients with invasive infections had significantly higher 90-day mortality (43.8% vs. 18.4%; p=0.035) and a higher median Charlson Comorbidity Index (6 vs. 3; p=0.003). Notably, no patient isolates were clonal, verifying the absence of hospital-based transmission. Patient isolates showed increased carriage of hyaluronidases (nagHIJKL), sialidases (nanIJ), and perfringolysin O (pfoA). Genomic-epidemiologic analyses identified a new independent association of the NagL hyaluronidase (OR 3.90, 95% CI 1.14-16.24) with highly morbid invasive infections. Conclusion: We present a comprehensive genomic analysis of C. perfringens and of strains infecting immunocompromised patients, including epidemiologic associations of the hyaluronidase NagL, the NanIJ sialidases, and perfringolysin O with highly morbid invasive infections. These genes provide potential markers to identify high-morbidity strains capable of infecting these populations and to further elucidate their role in invasive infections.
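The odds-ratio-with-CI format used for the NagL association can be illustrated with a standard Wald interval on the log-odds scale; the counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = gene-positive invasive, b = gene-positive non-invasive,
    c = gene-negative invasive, d = gene-negative non-invasive."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(20, 10, 12, 28)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The wide interval reported for NagL (1.14-16.24) is typical of small gene-positive cell counts, which inflate the standard error of log(OR).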
Dubey, A. K.; Reyes, J.; Rhiner, C.; Drescher, K.; Dunkel, J.; McKinney, J. D.; Egli, A.
Objectives: To quantify how urine sample type and polymicrobial context impact antimicrobial resistance (AMR) in urinary tract infections (UTIs), using routine diagnostics at scale. Methods: In this retrospective, single-centre study, we analysed 188,687 urine cultures from the Institute of Medical Microbiology, University of Zurich, Switzerland (January 2015 to May 2023). We compared midstream urine (MU), indwelling catheter (IDC), and intermittent catheter (IMC) samples. Samples were classified as negative, bacteriuria, or UTI according to a microbiological UTI threshold (≥10^5 CFU/mL). We compared sample types using covariate-adjusted regression and constrained ordination for community composition. In bimicrobial cultures, we assessed co-occurrence using adjusted pairwise odds ratios and degree-preserving permutation null models, supported by partner-choice analyses. AMR was modelled as acquired resistance (AR) and total resistance (TR: acquired + intrinsic) probabilities, with predictor contributions quantified using mutual information. Results: Among 186,819 MU, IMC, and IDC samples, 56,867 met the UTI threshold. Catheter-associated UTIs (IDC and IMC) were ~60% more likely to be polymicrobial than MU samples. Community composition differed by sample type (p<0.001). In IDC samples, Escherichia coli was less prevalent than in MU, while device-associated pathogens such as Pseudomonas aeruginosa and Candida albicans were enriched. Most species pairs showed no increased co-occurrence after adjusting for covariates, but a subset showed reproducible enrichment across methods (e.g., C. albicans-C. glabrata). Organism identity was the dominant determinant of AMR, with the highest mutual information for both AR and TR. AR was higher in IDC samples for common uropathogens (e.g., E. coli). Co-isolation with hospital-associated partners (e.g., Enterococcus faecium) was associated with further AR increases. From 2015 to 2023, AR increased from ~48% to ~60%, with rising β-lactam (+β-lactamase inhibitor) resistance and declining fluoroquinolone resistance in Enterobacterales. Conclusions: Sample type and co-isolated partners provide clinically actionable information beyond pathogen identity and could support more context-aware reporting and empiric prescribing.
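The degree-preserving permutation idea for bimicrobial co-occurrence can be sketched as follows: shuffle one partner across cultures so every species keeps its marginal frequency, then compare permuted co-occurrence counts with the observed count. A toy illustration, not the authors' pipeline:

```python
import random

def cooccurrence_pvalue(cultures, pair, n_perm=2000, seed=1):
    """One-sided permutation p-value for enrichment of an unordered species
    pair among bimicrobial cultures. Shuffling the second isolate across
    cultures preserves each species' overall frequency ('degree')."""
    rng = random.Random(seed)
    first = [a for a, _ in cultures]
    second = [b for _, b in cultures]
    target = frozenset(pair)
    observed = sum(1 for a, b in cultures if frozenset((a, b)) == target)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(second)
        count = sum(1 for a, b in zip(first, second)
                    if frozenset((a, b)) == target)
        if count >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one so p is never exactly 0

# Toy data in which C. albicans and C. glabrata always co-occur:
cultures = ([("C. albicans", "C. glabrata")] * 6
            + [("E. coli", "E. faecalis")] * 14)
print(cooccurrence_pvalue(cultures, ("C. albicans", "C. glabrata")))
```

Conditioning on marginal frequencies in this way is what separates genuine pairwise affinity from two species simply both being common.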
Lettner, J. D.; Matskevich, P.; Focke, C.; Chikhladze, S.; Fichtner-Feigl, S.; Utzolino, S.; Ruess, D. A.
Background: Preoperative biliary stenting alters biliary colonization and may reduce the effectiveness of perioperative antibiotic prophylaxis in pancreatoduodenectomy. Although broader-spectrum regimens have been associated with improved infectious outcomes, their microbiological adequacy in routine clinical practice remains poorly defined. We therefore evaluated the real-world adequacy of a prolonged ampicillin-sulbactam protocol, its association with infectious outcomes and survival, and the potential impact of a universal piperacillin-tazobactam strategy. Methods: We analyzed all consecutive patients who underwent elective pancreatoduodenectomy from 2002 to 2023 at our tertiary center. Demographic, operative, microbiological, and outcome data were retrieved from a prospectively maintained database. Patients were stratified by stent status. Adequacy of prophylaxis was defined as full in vitro susceptibility of all bile isolates. Outcomes included 30-day infectious morbidity, clinically relevant POPF, PPH, DGE, reoperation, 30- and 90-day mortality, and long-term survival. A coverage simulation compared ampicillin-sulbactam with a hypothetical universal piperacillin-tazobactam strategy. Statistical methods included chi-square/Fisher's exact tests, Mann-Whitney U tests, Cox models, McNemar's test, and Poisson regression. Results: Of 956 patients, 424 (44%) had a biliary stent. Technical complications were comparable between groups, and rates of POPF and PPH were not increased. However, infectious morbidity was higher in stented patients, including sepsis (RR 1.62, 95% CI 1.05-2.51) and postoperative cholangitis (RR 2.20, 95% CI 1.36-3.56). Thirty- and 90-day mortality were increased (RR 2.88 and 2.73) but lost significance after adjustment. Bile cultures predominantly yielded Enterococcus and Enterobacterales with low ampicillin-sulbactam susceptibility. Overall adequacy was 21.7%. Among patients with bile cultures (n = 474), ampicillin-sulbactam covered 43.7% (207/474) versus 81.2% (385/474) with piperacillin-tazobactam; in stented patients with cultures (n = 397), coverage increased from 41.8% to 78.1%. Adequate ampicillin-sulbactam coverage was not associated with reduced infectious outcomes in Poisson models. Conclusion: Preoperative stenting creates a polymicrobial, partially resistant biliary niche that ampicillin-sulbactam does not sufficiently cover. Our data show that a piperacillin-tazobactam strategy substantially improves coverage; it has therefore been implemented at our center. Core messages: (1) Stented patients exhibit a distinct infectious risk profile characterized by Enterococcus- and Enterobacterales-dominated bile colonization rather than increased rates of technical complications. (2) In stented patients, real-world microbiological coverage of ampicillin-sulbactam was limited, and in vitro susceptibility did not independently translate into reduced postoperative infectious morbidity. (3) Broader prophylaxis, such as piperacillin-tazobactam, aligns with the actual flora and nearly doubles theoretical coverage, addressing the mismatch between stent-associated biofilms and narrow regimens.
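The paired comparison of two regimens' coverage on the same bile isolates is exactly what McNemar's test is built for; a minimal sketch, where the discordant counts below are hypothetical, since the abstract reports only the marginal coverage of 207/474 vs 385/474:

```python
def mcnemar_chi2(b, c):
    """McNemar chi-square statistic from the discordant cells of a paired
    2x2 table: b = isolates covered by piperacillin-tazobactam only,
    c = isolates covered by ampicillin-sulbactam only."""
    return (b - c) ** 2 / (b + c)

# Hypothetical split consistent with the reported margins
# (385 - 207 = 178 net discordance): b = 180, c = 2.
print(round(mcnemar_chi2(180, 2), 1))  # 174.1
```

Because only the discordant pairs carry information about which regimen covers more, the concordant cells drop out of the statistic entirely.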
Muhaildin, A. J.; M. Hussein, A.; Faraj, R. K.
Background: The continuing emergence of multidrug-resistant "superbugs" has undermined what once looked like a decisive victory over bacterial infection in the antibiotic era. Pathogens such as Klebsiella pneumoniae, Acinetobacter baumannii, and Escherichia coli now cause an alarming number of infections in hospitals and communities worldwide. Our objective was to characterize antibiotic resistance profiles, genotypic resistance mechanisms, and resistance trends in common pathogenic bacteria from hospitals across Iraq, and to draw out the implications for antimicrobial stewardship. Methods: We combined multicenter phenotypic surveillance with genomic analysis. Clinical isolates were collected from a geographically diverse set of participating hospitals over 24 months, focusing on the priority pathogens K. pneumoniae, A. baumannii, E. coli, P. aeruginosa, and S. aureus. Isolates were characterized in a central reference laboratory using standardized techniques, including broth microdilution, to confirm identity and establish resistance phenotypes against prioritized antibiotics, including carbapenems, third-generation cephalosporins, and fluoroquinolones, and we derived data on the emergence patterns and geographic distribution of resistant organisms such as MRSA and CRE. Whole-genome sequencing was performed on a subset of isolates chosen to represent the diversity of geographic origins, resistance phenotypes, and collection times. Results: The sample comprised Escherichia coli (n = 225), Klebsiella pneumoniae (n = 185), Staphylococcus aureus (n = 135), Pseudomonas aeruginosa (n = 90), and Acinetobacter baumannii (n = 125). Ceftriaxone resistance was found in 80.4% of E. coli, ciprofloxacin resistance in 45.6%, and meropenem resistance in 15.1%. K. pneumoniae exhibited 38.9% resistance to aminoglycosides and 70.2% resistance to carbapenems. The proportion of MRSA among S. aureus was 55.5%. P. aeruginosa showed 22.2% resistance to colistin, 37.8% resistance to piperacillin-tazobactam, and 50.0% resistance to ceftazidime. Imipenem resistance was found in 85.6% of A. baumannii isolates, and colistin resistance in 28.8%. In all, 3.4% of isolates were pan-drug-resistant (PDR), 14.6% extensively drug-resistant (XDR), and 52.1% multidrug-resistant (MDR). WGS identified common resistance genes such as bla_NDM-1, bla_OXA-48, mcr-1, and aac(6')-Ib, and the plasmid replicons IncF, IncL/M, and IncI2. Carbapenem resistance in Gram-negative bacteria rose by around 18% over five years. Conclusions: These findings illustrate how rapidly mobile genetic elements spread resistance determinants among bacteria. High-level resistance is the expected consequence of resistance genes disseminating through successful clones within healthcare systems. Molecular-level understanding of resistance mechanisms will be crucial to combating these infections in the coming years.
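The MDR/XDR/PDR percentages follow the standard Magiorakos interim definitions; a simplified classifier over per-class susceptibility results (a sketch of the definitions, not the authors' code):

```python
def classify_resistance(profile):
    """profile maps antimicrobial class -> True if the isolate is
    non-susceptible to at least one agent in that class.
    PDR: non-susceptible in all classes tested;
    XDR: susceptible in at most 2 classes;
    MDR: non-susceptible in at least 3 classes."""
    n_classes = len(profile)
    n_resistant = sum(1 for resistant in profile.values() if resistant)
    if n_resistant == n_classes:
        return "PDR"
    if n_classes - n_resistant <= 2:
        return "XDR"
    if n_resistant >= 3:
        return "MDR"
    return "non-MDR"

isolate = {"carbapenems": True, "cephalosporins": True,
           "fluoroquinolones": True, "aminoglycosides": False,
           "polymyxins": False, "tetracyclines": False}
print(classify_resistance(isolate))  # MDR
```

Note that the categories nest (every PDR isolate is also XDR and MDR), so the checks must run from most to least restrictive, as above.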
Butzler, M.; Reed, J.; Olson, A.; Wood, R.; Cangelosi, G. A.; Luabeya, A. K.; Hatherill, M.; Chiwaya, A. M.; Rockman, L.; Theron, G.; McFall, S. M.
Mycobacterium tuberculosis (MTB) disease is a major global health threat, with most tuberculosis (TB) cases occurring in low- and middle-income countries (LMICs) with limited healthcare infrastructure. Near-point-of-care testing that can be deployed in peripheral clinical settings is needed to start treatment earlier and thereby improve treatment outcomes. Here we report the development and preliminary characterization of an MTB detection assay that uses tongue swab or sputum specimens on the DASH® Rapid PCR System, which combines cartridge-based, automated, sequence-specific capture sample preparation with dual-target qPCR amplification and detection of the multicopy MTB insertion sequences IS6110 and IS1081. Because MTB is resistant to conventional bacterial lysis techniques, we evaluated two pre-cartridge lysis techniques, mechanical lysis and sonication, and selected sonication for all subsequent studies. The DASH MTB assay demonstrated a limit of detection of 2.5 MTB cells/swab with no detection of 10 non-tuberculous Mycobacterium strains. Clinical testing of 100 de-identified, blinded sputa (49 positive and 51 negative) from South African symptomatic clinic attendees yielded an overall sensitivity of 96% (100% for smear-positive samples and 88% for smear-negative samples) and a specificity of 88% compared with sputum culture. In a separate study of 110 tongue swab specimens (70 positive and 40 negative) from South African symptomatic clinic attendees, sensitivity was 93% and specificity was 100%. We further demonstrated that the test is compatible with peripheral LMIC settings via external battery operation and cartridge stability at 45°C for up to one year. Importance: Tuberculosis (TB) is the single most deadly infectious disease, with 1.23 million deaths in 2024. Near-point-of-care testing that can be deployed in peripheral settings lacking laboratory infrastructure is needed to deliver prompt, accurate diagnosis, start treatment earlier, and thereby improve treatment outcomes. In this study, we developed an automated test to detect Mycobacterium tuberculosis (MTB), the cause of TB, from sputum and tongue swab specimens. Its high sensitivity and specificity, rapid time to result, and compatibility with environments that lack air conditioning and consistent electricity make this assay suitable for diverse clinical settings.
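The headline accuracy figures reduce to simple 2x2 arithmetic against the culture reference; for example, the tongue-swab results are consistent with 65/70 true positives and 40/40 true negatives (an assumed split that matches the rounded percentages, not counts reported in the abstract):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    both computed against the reference standard (here, sputum culture)."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=65, fn=5, tn=40, fp=0)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # 93%, 100%
```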
Nordan, A. G.; Ward, I.; Stancil, M. L.; Schmale, G.; Bodner, G.
Background: The physician assistant (PA) workforce has expanded rapidly in the United States, increasing the importance of effective physician-PA collaboration. Although PAs improve patient outcomes and access to care, the determinants of effective collaboration have not been well studied. North Carolina provides a relevant context due to its growing PA workforce and supervisory regulatory structure, in which physicians retain administrative responsibility for PA supervision across practice settings. This study examines determinants of effective physician-PA collaboration in ambulatory care settings in North Carolina. Methods: Four virtual focus groups were conducted with practicing physicians (n=7) and PAs (n=9) across multiple specialties in NC. Transcripts were analyzed using thematic analysis to identify facilitators and barriers to collaboration. Results: Thematic analysis identified six major themes reflecting relational, organizational, and systemic influences on teamwork. Findings demonstrate that collaboration evolves over time through early-career mentorship, continuity of working relationships, and progressive trust development. Differences in professional identity, power dynamics, and misunderstanding of PA scope of practice influenced autonomy and delegation. Systemic factors such as reimbursement structures and organizational supervisory policies hindered efficient teamwork. Limitations: Findings are based on a small, purposive sample within a single state and may not be generalizable to all ambulatory settings or regulatory environments. Perspectives may also reflect self-selection bias among participants with strong views on collaboration. Conclusions: Effective physician-PA collaboration depends on intentional onboarding, role clarity, interprofessional education, and alignment of organizational policies with regulatory standards to support team-based care.
Aborisade, A.; Ali, A. M.; Amedari, M.; Salako, A. O.; Akinsolu, F. T.; Abodunrin, O. R.; Ola, O. M.; Olagunju, M. T.; Eleje, G. U.; Lusher, J.; Ezechi, O. C.; Folayan, M. O.
Background: The use of fluoride-containing dentifrices can reduce the risk of dental caries. This systematic review addressed two research questions: (i) the prevalence and frequency of fluoridated toothpaste use among Nigerian children and adolescents across geographic and demographic settings, and (ii) its association with dental caries prevalence, stratified by location and baseline caries risk. Methods: This systematic review, registered with PROSPERO (CRD42022362116), followed the PRISMA guidelines. A PIO framework was applied to include children and adolescents (6 months-19 years) in Nigeria using fluoridated toothpaste, with caries outcomes measured via dmft/DMFT indices. A comprehensive search of PubMed, Web of Science, Scopus, Embase, AJOL, and Google Scholar was conducted from January 2001 to January 2026, supplemented by reference and grey literature searches. Study selection, data extraction, and risk of bias assessment using an adapted Hoy et al. tool were performed independently by multiple reviewers, with high inter-rater reliability (Kappa=0.90). Data were pooled using a random-effects model, with sensitivity, subgroup, and meta-regression analyses conducted to explore heterogeneity and effect modifiers. Publication bias was assessed using funnel plots and Egger's test. Results: Of 1,194 identified records, 18 studies (n=12,719 participants) were included. The use of fluoridated toothpaste was widespread (prevalence: 61.9% to 95.8%), yet its association with dental caries varied significantly by location. A meta-analysis of 14 studies indicated a significant 16% reduction in caries odds with fluoridated toothpaste use after removal of an influential outlier (OR = 0.84, 95% CI: 0.71-0.99, p=0.04). Subgroup analyses revealed that this protective association was significant in urban and rural settings (p<0.05) but absent in suburban Nigeria. Furthermore, dental caries prevalence and severity (DMFT/dmft) were substantially higher in urban and rural areas, where the association was significant, than in suburban regions. All studies were assessed as having a low risk of bias, and no significant publication bias was detected. Conclusion: Fluoridated toothpaste is widely used in Nigeria and is associated with a reduction in the prevalence of dental caries, though the relationship appears to be moderated by residential location and baseline caries severity (DMFT/dmft). Longitudinal studies are needed to explore the interactions between DMFT/dmft, use of fluoridated toothpaste, and residential location in Nigeria.
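Random-effects pooling of the kind behind the OR = 0.84 (95% CI 0.71-0.99) summary can be sketched with the DerSimonian-Laird estimator, back-calculating each study's standard error from its CI on the log-odds scale. The study inputs below are hypothetical, not the review's data:

```python
import math

def dersimonian_laird(ors, cis, z=1.96):
    """Random-effects pooled OR from per-study ORs and 95% CIs.
    Returns (pooled OR, lower limit, upper limit)."""
    y = [math.log(o) for o in ors]
    se = [(math.log(hi) - math.log(lo)) / (2 * z) for lo, hi in cis]
    w = [1 / s ** 2 for s in se]                       # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)            # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in se]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    half = z * math.sqrt(1 / sum(w_re))
    return math.exp(pooled), math.exp(pooled - half), math.exp(pooled + half)

# Three hypothetical studies:
print(dersimonian_laird([0.7, 0.9, 0.85],
                        [(0.5, 0.98), (0.7, 1.16), (0.6, 1.2)]))
```

When between-study heterogeneity (tau^2) is zero the estimator collapses to the fixed-effect inverse-variance average, which is why outlier removal, as in the review, can shift both the point estimate and the interval width.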
Gilboa, M.; Barda, N.; Weiss-Ottolenghi, Y.; Canetti, M.; Peretz, Y.; Margalit, I.; Joseph, G.; Mandelboim, M.; Lustig, Y.; Regev-Yochay, G.
Objective: To quantify the seasonal burden of acute respiratory viral infections among healthcare workers (HCWs), characterize virologic etiologies, and identify predictors of symptomatic illness and sick leave. Methods: We conducted a prospective cohort study of HCWs during winter 2024-2025, with weekly surveys capturing acute respiratory infections (ARIs) and sick leave. Nasal-throat multiplex PCR swabs were self-collected during symptomatic episodes. Incidence rate ratios (IRRs) for symptomatic episodes and sick days were estimated using Poisson regression; presenteeism was assessed among febrile episodes. Results: Among 655 HCWs, 400 (61.1%) reported ≥1 symptomatic episode. Over 70,861 person-days, incidence rates were 1.34 symptomatic episodes and 0.82 sick days per 100 person-days. Among PCR-confirmed episodes (n=112), rhinovirus (45.5%) and influenza (23.2%) predominated. Female sex was associated with higher rates of symptomatic episodes (IRR 1.38, 95% CI 1.11-1.72) and sick days (IRR 2.55, 95% CI 1.62-4.00), while age >56 years was associated with lower rates of both. During febrile episodes, 38.8% (95% CI 31.5%-46.6%) of respondents reported working despite fever. Conclusions: ARIs were common among HCWs and frequently resulted in sick leave, yet febrile presenteeism remained substantial, underscoring the need for strengthened respiratory virus prevention and occupational health policies.
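The IRRs here come from Poisson regression, but the crude (unadjusted) version is just a ratio of person-time rates with a Wald CI on the log scale; a sketch with toy counts, not the study's data:

```python
import math

def rate_ratio(events_a, persondays_a, events_b, persondays_b, z=1.96):
    """Crude incidence rate ratio of group A vs group B, with a Wald 95% CI
    using se(log IRR) = sqrt(1/events_a + 1/events_b)."""
    irr = (events_a / persondays_a) / (events_b / persondays_b)
    se = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Toy example: 60 episodes over 4,000 person-days vs 40 over 4,000:
print(rate_ratio(60, 4000, 40, 4000))  # IRR 1.5 with its 95% CI
```

Poisson regression generalizes this by putting log(person-time) in as an offset, which is how the published IRRs adjust for covariates such as sex and age.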
Johnson, K. E.; Vega Yon, G.; Brand, S. P. C.; Bernal Zelaya, C.; Bayer, D.; Volkov, I.; Susswein, Z.; Magee, A.; Gostic, K. M.; English, K. M.; Ghinai, I.; Hamlet, A.; Olesen, S. W.; Pulliam, J.; Abbott, S.; Morris, D. H.
Infectious disease forecasts can inform public health decision-making. Wastewater monitoring is a relatively new epidemiological data source with multiple potential applications, including forecasting. Incorporating wastewater data into epidemiological forecasting models is challenging, and relatively few studies have assessed whether doing so improves forecast performance. We present and evaluate a semi-mechanistic wastewater-informed forecasting model. The model forecasts COVID-19 hospital admissions at the state and territorial levels in the United States, based on incident hospital admissions data and, optionally, SARS-CoV-2 wastewater concentration data from multiple wastewater sampling sites. From February through April 2024, we produced real-time wastewater-informed COVID-19 forecasts using development versions of the model and submitted them to the United States COVID-19 Forecast Hub ("the Hub"). We then published an open-source R package, wwinference, that implements the model with or without wastewater as an input. Using proper scoring rules and measures of model calibration, we assess both our real-time submissions to the Hub and retrospective hypothetical forecasts from wwinference made with and without wastewater data. While the models performed similarly with and without the wastewater signal included, there was substantial heterogeneity across individual locations and dates, where wastewater data meaningfully improved or degraded the model's forecast performance. Compared to other models submitted to the Hub during the period spanned by our submissions, the real-time wastewater-informed version of our model ranked fourth of 10 models, and the hospital admissions-only version ranked second. Across the 2023-2024 winter epidemic wave, retrospective forecasts from wwinference would have performed similarly with and without the wastewater signal included: fifth and fourth out of 10 models, respectively. To better understand the drivers of differential forecast performance with and without wastewater, we performed an exploratory analysis of the relationship between characteristics of the input data and improved or reduced performance in our model. Based on that analysis, we identify and discuss key areas for further model development. To our knowledge, this is the first work to evaluate real-time and retrospective infectious disease forecasts across the United States both with and without wastewater data and in comparison to other forecasting models. Author Summary: Wastewater-based epidemiology, in combination with clinical surveillance, has the potential to improve situational awareness and inform outbreak responses. We developed a model that uses data on pathogen concentrations in wastewater from one or more wastewater treatment plants, in combination with hospital admissions, to produce short-term forecasts of hospital admissions. We produced and submitted 28-day-ahead forecasts of COVID-19 hospital admissions from this model to the U.S. COVID-19 Forecast Hub during the spring of 2024 and found that it performed well compared with other models during that limited period. To assess the added value of incorporating wastewater data, and to investigate how the model would have performed had we submitted it during the entire 2023-2024 winter epidemic wave, we performed a retrospective analysis in which we produced forecasts with and without wastewater data, using only data that would have been available in real time as of each forecast date. Both versions of the model would have been median overall performers had they been submitted to the Hub throughout the season. Comparing the model's performance with and without wastewater data, we found that overall forecast performance was very similar, with wastewater data slightly reducing the overall average. Within this result there was significant heterogeneity, with clear instances of wastewater data both improving and detracting from forecast performance. We used trends in the observed data to generate hypotheses about the drivers of improved and reduced relative forecast performance within our model. We conclude by suggesting future work to improve the model and, more broadly, the application of wastewater-based epidemiology to forecasting.
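Proper scoring rules of the kind used for Hub evaluations center on the weighted interval score (WIS), which rewards narrow, well-calibrated prediction intervals; a minimal sketch with hypothetical forecast values (a simplified form of the Hub's metric, not the authors' evaluation code):

```python
def interval_score(y, lower, upper, alpha):
    """Score for a central (1 - alpha) prediction interval: interval width
    plus a penalty proportional to how far the observation falls outside."""
    score = upper - lower
    if y < lower:
        score += (2 / alpha) * (lower - y)
    elif y > upper:
        score += (2 / alpha) * (y - upper)
    return score

def weighted_interval_score(y, median, intervals):
    """WIS over a list of (alpha, lower, upper) intervals; lower is better."""
    total = 0.5 * abs(y - median)
    for alpha, lower, upper in intervals:
        total += (alpha / 2) * interval_score(y, lower, upper, alpha)
    return total / (len(intervals) + 0.5)

# Hypothetical forecast of weekly hospital admissions: observed 120,
# median 100, with 50% and 90% central prediction intervals:
print(weighted_interval_score(120, 100, [(0.5, 80, 125), (0.1, 60, 150)]))
```

Because WIS decomposes into sharpness plus over- and under-prediction penalties, it makes visible exactly where an added wastewater signal helps (tighter intervals) or hurts (miscalibrated ones).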